81 research outputs found
A survey on fault-models for QoS studies of service-oriented systems
This survey paper presents an overview of the fault-models available to the
researcher who wants to parameterise system-models in order to study Quality-
of-Service (QoS) properties of systems with service-oriented architecture. The
concept of a system-model subsumes the whole spectrum between abstract
mathematical models and testbeds based on actual implementations. Fault-
models, on the other hand, are parameters to system-models. They introduce
faults and disturbances into the system-model, thereby allowing the study of
QoS under realistic conditions. In addition to a survey of existing fault-
models, the paper also provides a discussion of available fault-classification
schemes.
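As a minimal illustration of the statement that fault-models are parameters to system-models, the sketch below wraps a service stub with a simple Bernoulli fault-model; all names and the failure probability are hypothetical, not taken from the survey:

```python
import random

def make_faulty(service, p_fail, rng=None):
    """Parameterise a system-model (here: a plain callable) with a
    Bernoulli fault-model: each call fails independently with
    probability p_fail."""
    rng = rng or random.Random()
    def faulty(*args, **kwargs):
        if rng.random() < p_fail:
            raise TimeoutError("injected fault")
        return service(*args, **kwargs)
    return faulty

# Observe the QoS effect: measured availability of the wrapped service.
rng = random.Random(42)
svc = make_faulty(lambda x: 2 * x, p_fail=0.1, rng=rng)
ok = 0
for _ in range(10000):
    try:
        svc(1)
        ok += 1
    except TimeoutError:
        pass
print(ok / 10000)  # close to 1 - p_fail = 0.9
```

More elaborate fault-models (correlated bursts, degraded-response faults) slot into the same wrapper position, which is what makes them interchangeable parameters of the system-model.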
NetemCG – IP packet-loss injection using a continuous-time Gilbert model
Injection of IP packet loss is a versatile method for emulating real-world
network conditions in performance studies. In order to reproduce realistic
packet-loss patterns, stochastic fault-models are used. In this report we
describe our implementation of a Linux kernel module that uses a
continuous-time Gilbert model for packet-loss injection.
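The model itself can be sketched in a few lines (parameter names are ours; the kernel-module implementation is in C and not reproduced here): a two-state continuous-time Markov chain alternates between a loss-free GOOD state and a lossy BAD state, and a packet is dropped iff the chain is in the BAD state at its arrival instant.

```python
import random

def gilbert_loss_trace(n_packets, interval, rate_gb, rate_bg, seed=None):
    """Simulate packet loss under a two-state continuous-time Gilbert
    model: GOOD -> BAD with rate rate_gb, BAD -> GOOD with rate
    rate_bg.  Packets arrive every `interval` seconds; a packet is
    lost iff the chain is in BAD at its arrival time."""
    rng = random.Random(seed)
    state = 'GOOD'
    t = 0.0
    next_switch = rng.expovariate(rate_gb)   # first GOOD sojourn ends here
    losses = []
    for _ in range(n_packets):
        # advance the chain past all switches before this arrival
        while next_switch <= t:
            state = 'BAD' if state == 'GOOD' else 'GOOD'
            rate = rate_bg if state == 'BAD' else rate_gb
            next_switch += rng.expovariate(rate)
        losses.append(state == 'BAD')
        t += interval
    return losses

trace = gilbert_loss_trace(100000, 0.01, rate_gb=1.0, rate_bg=9.0, seed=1)
# stationary loss probability is rate_gb / (rate_gb + rate_bg) = 0.1;
# the empirical loss rate should be near that value
print(sum(trace) / len(trace))
```

Unlike independent (Bernoulli) loss, this model produces the bursty loss patterns observed on real paths, because consecutive packets tend to see the same chain state.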
Efficient Image Stitching through Mobile Offloading
Image stitching is the task of combining images with overlapping parts into one large image. It requires a sequence of complex computation steps; in particular, execution on a mobile device can take long and consume considerable energy. Mobile offloading may alleviate these problems, as it aims at improving performance and saving energy when executing complex applications on mobile devices. In this paper we investigate to what extent mobile offloading can improve the performance and energy efficiency of image stitching on mobile devices. We demonstrate our approach by stitching two or four images, but the process can easily be extended to an arbitrary number of images. We study three methods of offloading parts of the computation to a resourceful server and evaluate them using several metrics. In the first offloading strategy all contributing images are sent, processed, and the combined image is returned. In the second strategy the images are offloaded, but not all stitching steps are executed on the remote server; instead a smaller XML file is returned to the mobile client. The XML file contains the homography information the mobile device needs to perform the last stitching step, the combination of the images. In the third strategy the images are converted to greyscale before being transmitted to the server, and again an XML file is returned. The metrics considered are the execution time, the size of the data to be transmitted, and the memory usage. We find that the first strategy achieves the lowest total execution time, but it requires more data to be transmitted than either of the other strategies.
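The homography that the second strategy ships back in the XML file is a 3×3 matrix; the final on-device step maps points of one image into the frame of the other through it. A minimal sketch (matrix and image sizes are hypothetical, not from the paper):

```python
def apply_homography(H, pts):
    """Map 2-D points through a 3x3 homography H (row-major nested
    lists), using homogeneous coordinates."""
    out = []
    for x, y in pts:
        w = H[2][0] * x + H[2][1] * y + H[2][2]
        out.append(((H[0][0] * x + H[0][1] * y + H[0][2]) / w,
                    (H[1][0] * x + H[1][1] * y + H[1][2]) / w))
    return out

# A pure translation by (100, 0): warping one image's corners tells
# the client how large the combined canvas must be.
H = [[1, 0, 100], [0, 1, 0], [0, 0, 1]]
corners = [(0, 0), (640, 0), (640, 480), (0, 480)]
warped = apply_homography(H, corners)
print(warped[1])  # (740.0, 0.0)
```

Because the nine matrix entries are tiny compared to an image, returning only the homography explains why the second and third strategies transmit far less data than the first.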
10292 Abstracts Collection and Summary -- Resilience Assessment and Evaluation
From July 18 to July 23, 2010, the Dagstuhl Seminar 10292 "Resilience Assessment and Evaluation" was held at Schloss Dagstuhl (Leibniz Center for Informatics).
During the seminar, several participants presented their current
research, and ongoing work and open problems were discussed. Abstracts of
the presentations given during the seminar as well as abstracts of
seminar results and ideas are put together in this paper. The first section
describes the seminar topics and goals in general.
Links to extended abstracts or full papers are provided where available.
Gossip routing, percolation, and restart in wireless multi-hop networks
Route and service discovery in wireless multi-hop networks applies flooding or
gossip routing to disseminate and gather information. Since packets may get
lost, retransmissions of lost packets are required. In many protocols the
retransmission timeout is fixed in the protocol specification. In this
technical report we demonstrate that optimization of the timeout is required
in order to ensure proper functioning of flooding schemes. Based on an
experimental study, we apply percolation theory and derive analytical models
for computing the optimal restart timeout. To the best of our knowledge, this
is the first comprehensive study of gossip routing, percolation, and restart
in this context.
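The timeout optimisation argued for above can be illustrated with classical restart analysis: if F and f are the CDF and PDF of the raw reply time and attempts are independent, the expected completion time under restart timeout τ is τ·(1−F(τ))/F(τ) + (1/F(τ))·∫₀^τ t·f(t) dt. The sketch below grid-searches this expression for a lognormal reply time; the distribution and its parameters are illustrative assumptions, not the report's measurements.

```python
import math

def lognorm_pdf(t, mu=0.0, sigma=1.0):
    if t <= 0:
        return 0.0
    z = (math.log(t) - mu) / sigma
    return math.exp(-0.5 * z * z) / (t * sigma * math.sqrt(2 * math.pi))

def lognorm_cdf(t, mu=0.0, sigma=1.0):
    if t <= 0:
        return 0.0
    return 0.5 * (1 + math.erf((math.log(t) - mu) / (sigma * math.sqrt(2))))

def expected_time_with_restart(tau, steps=4000):
    """Expected completion time when every attempt is aborted and
    retried after timeout tau (attempts are i.i.d. lognormal here)."""
    p = lognorm_cdf(tau)                 # attempt finishes within tau
    if p == 0:
        return float('inf')
    h = tau / steps                      # trapezoid rule for int_0^tau t f(t) dt
    vals = [i * h * lognorm_pdf(i * h) for i in range(steps + 1)]
    trunc = h * (sum(vals) - 0.5 * (vals[0] + vals[-1]))
    # tau wasted per failed attempt, geometric number of failures
    return tau * (1 - p) / p + trunc / p

# grid search for the optimal restart timeout
taus = [0.1 * k for k in range(1, 201)]
best = min(taus, key=expected_time_with_restart)
print(round(best, 1), round(expected_time_with_restart(best), 2))
```

For this lognormal the optimum is an interior timeout: restarting too early wastes nearly complete attempts, while never restarting pays the full heavy tail, which is exactly why a fixed protocol-specified timeout can be far from optimal.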
Stochastic models for dependable services
In this paper we investigate the use of stochastic models for analysing service-oriented systems. We propose an iterative hybrid approach that uses system measurements, testbed observations and formal models to derive a quantitative model of service-based systems, allowing us to evaluate the effectiveness of the restart method in such systems. In cases where one is fortunate enough to have access to a real system for measurements, the data obtained often lacks statistical significance, or knowledge of the system is insufficient to explain the data. A testbed may then be preferable, as it allows for long experiment series and provides full control over the system's configuration. In order to produce meaningful data, the testbed must be equipped with fault injection based on a suitable fault-model and an appropriate load model. We fit phase-type distributions to the data obtained from the testbed in order to represent the observed data in a model that can be used, e.g., as a service process in a queueing model of our service-oriented system. The queueing model can then be used to analyse different restart policies, buffer sizes or service disciplines. Results from the model can be fed back into the testbed, providing it with better fault and load models and thus closing the modelling loop.
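The phase-type fitting step can be illustrated with its simplest instance: matching the first two moments of measured data with a two-phase hyperexponential using the textbook balanced-means method (the paper's actual fitting procedure may be more elaborate):

```python
import math

def fit_h2(mean, scv):
    """Fit a two-phase hyperexponential (the simplest phase-type
    distribution for bursty data) to a given mean and squared
    coefficient of variation (scv > 1), balanced-means variant:
    branch probability p, rates l1 and l2."""
    if scv <= 1:
        raise ValueError("an H2 fit requires scv > 1")
    p = 0.5 * (1 + math.sqrt((scv - 1) / (scv + 1)))
    l1 = 2 * p / mean
    l2 = 2 * (1 - p) / mean
    return p, l1, l2

# fit to hypothetical testbed measurements: mean 2.0, scv 4.0
p, l1, l2 = fit_h2(mean=2.0, scv=4.0)
m1 = p / l1 + (1 - p) / l2                    # first moment of the fit
m2 = 2 * p / l1**2 + 2 * (1 - p) / l2**2      # second moment of the fit
print(round(m1, 3), round(m2 / m1**2 - 1, 3)) # 2.0 4.0 -- both recovered
```

The fitted (p, l1, l2) triple then serves directly as the service process of the queueing model, which is what makes phase-type distributions convenient glue between measured data and analytical models.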
Detection of solidification crack formation in laser beam welding videos of sheet metal using neural networks
Laser beam welding has become widely applied in many industrial fields in recent years. Solidification cracks remain one of the most common welding faults that can prevent a safe welded joint. In civil engineering, convolutional neural networks (CNNs) have been successfully used to detect cracks in roads and buildings by analysing images of the constructed objects. Those cracks are found in static objects, whereas the generation of a welding crack is a dynamic process. Detecting crack formation as early as possible is crucial to ensuring high welding quality. In this study, two end-to-end models based on long short-term memory and three-dimensional convolutional networks (3D-CNN) are proposed for automatic detection of crack formation. To achieve maximum accuracy with minimal computational complexity, we progressively modify the model to find the optimal structure. The controlled tensile weldability test is conducted to generate long videos used for training and testing. The performance of the proposed models is compared with the classical neural network ResNet-18, which has been proven to be a good transfer-learning model for crack detection. The results show that our models can detect the start time of crack formation earlier, while ResNet-18 only detects cracks during the propagation stage.
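The core primitive that distinguishes a 3D-CNN from the 2D CNNs used on static cracks is convolution over time as well as space. A naive sketch of that operation (illustrative only; the paper's networks are far larger, with learned kernels):

```python
import numpy as np

def conv3d_valid(clip, kernel):
    """Naive 'valid' 3-D convolution (cross-correlation) of a
    single-channel video clip (T, H, W) with a kernel (t, h, w):
    the basic building block of a 3D-CNN for video."""
    T, H, W = clip.shape
    t, h, w = kernel.shape
    out = np.empty((T - t + 1, H - h + 1, W - w + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            for k in range(out.shape[2]):
                out[i, j, k] = np.sum(clip[i:i+t, j:j+h, k:k+w] * kernel)
    return out

# A temporal-difference kernel responds to sudden frame-to-frame
# change, i.e. to dynamic events a per-frame 2D CNN cannot see.
clip = np.zeros((4, 5, 5))
clip[2:] = 1.0                       # a bright "crack" appears at frame 2
kernel = np.zeros((2, 1, 1))
kernel[0, 0, 0], kernel[1, 0, 0] = -1.0, 1.0
resp = conv3d_valid(clip, kernel)
print(resp[:, 0, 0])                 # peak exactly at the change point
```

Because the kernel spans frames, the response localises the *onset* of the change, which is the property the study exploits to detect crack formation earlier than a static-image classifier.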
Strain Prediction Using Deep Learning during Solidification Crack Initiation and Growth in Laser Beam Welding of Thin Metal Sheets
The strain field can reflect the initiation time of solidification cracks during the welding process. Traditionally, strain is measured by first obtaining the displacement field through digital image correlation (DIC) or optical flow and then calculating the strain field from it. The main disadvantage is that this calculation takes a long time, limiting its suitability for real-time applications. Recently, convolutional neural networks (CNNs) have made impressive achievements in computer vision. To build a good prediction model, the network structure and the dataset are two key factors. In this paper, we first create training and test sets containing welding cracks using the controlled tensile weldability (CTW) test and obtain the real strain fields through the Lucas–Kanade algorithm. Then, two new networks using ResNet and DenseNet as encoders, called StrainNetR and StrainNetD, are developed for strain prediction. The results show that the average endpoint error (AEE) of the two networks on our test set is about 0.04, close to the real strain values. The computation time can be reduced to the millisecond level, which would greatly improve efficiency.
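The AEE metric quoted above is simply the mean Euclidean distance between predicted and ground-truth 2-D vectors. A minimal sketch with toy values (not the paper's data):

```python
import math

def average_endpoint_error(pred, true):
    """Average endpoint error (AEE): mean Euclidean distance between
    predicted and ground-truth 2-D vectors, the standard metric for
    optical-flow and dense-field predictions."""
    assert len(pred) == len(true)
    total = 0.0
    for (u1, v1), (u2, v2) in zip(pred, true):
        total += math.hypot(u1 - u2, v1 - v2)
    return total / len(pred)

pred = [(0.0, 0.0), (1.0, 1.0)]      # hypothetical predicted vectors
true = [(0.0, 0.03), (1.0, 1.05)]    # hypothetical ground truth
aee = average_endpoint_error(pred, true)
print(round(aee, 2))  # 0.04
```

An AEE near the magnitude of the field values themselves would be useless; the paper's point is that its networks reach a small AEE while replacing the slow DIC/optical-flow pipeline with a millisecond forward pass.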
Dynamic decision making for candidate access point selection
In this paper, we solve the problem of candidate access point selection in 802.11 networks when more than one access point is available to a station. We use the QBSS (quality-of-service enabled basic service set) Load Element of the new WLAN standard 802.11e as prior information and deploy a decision-making algorithm based on reinforcement learning. We show that using reinforcement learning, wireless devices can reach more efficient decisions than with static methods of decision making, which opens the way to a more autonomic communication environment. We also present how the reinforcement learning algorithm reacts to changing situations, enabling self-adaptation.
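A minimal sketch of reinforcement-learning-based access-point selection in the spirit of this paper: an ε-greedy bandit over candidate APs with hypothetical throughput numbers (the paper's use of the QBSS Load Element as prior information is not modelled here):

```python
import random

def select_ap(q, epsilon, rng):
    """Epsilon-greedy choice over candidate access points: explore a
    random AP with probability epsilon, else pick the best-known one."""
    if rng.random() < epsilon:
        return rng.randrange(len(q))
    return max(range(len(q)), key=q.__getitem__)

rng = random.Random(0)
true_mean = [0.3, 0.7]        # hypothetical mean throughputs; AP 1 is better
q = [0.0, 0.0]                # learned value estimates
counts = [0, 0]
for _ in range(2000):
    a = select_ap(q, epsilon=0.1, rng=rng)
    reward = true_mean[a] + rng.gauss(0, 0.1)   # noisy observed throughput
    counts[a] += 1
    q[a] += (reward - q[a]) / counts[a]         # incremental mean update
print(counts)  # the better AP ends up chosen far more often
```

The same update keeps running after association, so if an AP's load changes, the estimates and hence the station's choice adapt, which is the self-adaptation property the paper highlights.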